1.
Pharmacoecon Open ; 8(3): 493-505, 2024 May.
Article in English | MEDLINE | ID: mdl-38528312

ABSTRACT

BACKGROUND: Major depressive disorder (MDD) is a common, often recurrent condition and a significant driver of healthcare costs. People with MDD often receive pharmacological therapy as the first-line treatment, but the majority of people require more than one medication trial to find one that relieves symptoms without causing intolerable side effects. There is an acute need for more effective interventions to improve patients' remission and quality of life and reduce the condition's economic burden on the healthcare system. Pharmacogenomic (PGx) testing could help achieve these objectives by using genomic information to guide prescribing decisions. With an already complex and multifaceted care pathway for MDD, future evaluations of new treatment options require a flexible analytic infrastructure encompassing the entire care pathway. Individual-level simulation models are ideally suited for this purpose. We sought to develop an economic simulation model to assess the effectiveness and cost-effectiveness of PGx testing for individuals with major depression. Additionally, the model serves as an analytic infrastructure, simulating the entire patient pathway for those with MDD. METHODS AND ANALYSIS: Key stakeholders, including patient partners, clinical experts, researchers, and modelers, designed and developed a discrete-time microsimulation model of the clinical pathways of adults with MDD in British Columbia (BC), including all publicly funded treatment options and multiple treatment steps. The Simulation Model of Major Depression (SiMMDep) was coded with a modular approach to enhance flexibility. The model was populated using multiple original data analyses conducted with BC administrative data, a systematic review, and an expert panel. The model accommodates newly diagnosed and prevalent adult patients with MDD in BC, with and without PGx-guided treatment. SiMMDep comprises over 1500 parameters in eight modules: entry cohort, demographics, disease progression, treatment, adverse events, hospitalization, costs and quality-adjusted life-years (payoff), and mortality. The model predicts health outcomes and estimates costs from a health system perspective. In addition, the model can incorporate interactive decision nodes to address different implementation strategies for PGx testing (or other interventions) along the clinical pathway. We conducted various forms of model validation (face, internal, and cross-validity) to ensure the correct functioning and expected results of SiMMDep. CONCLUSION: SiMMDep is Canada's first medication-specific, discrete-time microsimulation model for the treatment of MDD. With patient partner collaboration guiding its development, it incorporates realistic care journeys. SiMMDep synthesizes existing information and incorporates province-specific data to predict the benefits and costs associated with PGx testing. These predictions estimate the effectiveness, cost-effectiveness, resource utilization, and health gains of PGx testing compared with the current standard of care. However, the flexible analytic infrastructure can be adapted to support other policy questions and facilitate the rapid synthesis of new data for a broader search for efficiency improvements in the clinical field of depression.
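
To illustrate the kind of discrete-time, individual-level loop the abstract describes, the following is a minimal Python sketch. The module structure, state names, weekly cycle, and every parameter value are illustrative assumptions for exposition; they are not SiMMDep's actual inputs, which the authors derived from BC administrative data, a systematic review, and expert elicitation.

```python
import random

# Illustrative-only parameters (assumed, not from SiMMDep).
WEEKS = 104
P_REMISSION = {"usual_care": 0.02, "pgx_guided": 0.025}        # per-week probability, assumed
WEEKLY_COST = {"depressed": 138.0, "remission": 20.0}           # CAD per week, assumed
WEEKLY_QALY = {"depressed": 0.60 / 52, "remission": 0.85 / 52}  # assumed utility weights

def simulate_patient(strategy: str, rng: random.Random) -> tuple[float, float]:
    """Walk one patient through weekly cycles; return (total cost, total QALYs)."""
    state, cost, qalys = "depressed", 0.0, 0.0
    for _ in range(WEEKS):
        if state == "depressed" and rng.random() < P_REMISSION[strategy]:
            state = "remission"
        cost += WEEKLY_COST[state]
        qalys += WEEKLY_QALY[state]
    return cost, qalys

def run_cohort(strategy: str, n: int = 10_000, seed: int = 1) -> tuple[float, float]:
    rng = random.Random(seed)
    results = [simulate_patient(strategy, rng) for _ in range(n)]
    return (sum(c for c, _ in results) / n, sum(q for _, q in results) / n)

for strategy in ("usual_care", "pgx_guided"):
    mean_cost, mean_qaly = run_cohort(strategy)
    print(f"{strategy}: mean cost ${mean_cost:,.0f}, mean QALYs {mean_qaly:.3f}")
```

A full model of this type adds the remaining modules (demographics, treatment steps, adverse events, hospitalization, mortality) as separate functions called inside the weekly loop, which is what makes the modular approach easy to extend.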

2.
BMC Health Serv Res ; 23(1): 1446, 2023 Dec 20.
Article in English | MEDLINE | ID: mdl-38124043

ABSTRACT

BACKGROUND: Major depressive disorder (MDD) is one of the world's leading causes of disability. Our purpose was to characterize the total costs of MDD and evaluate the degree to which the British Columbia provincial health system meets its objective to protect people from the financial impact of illness. METHODS: We performed a population-based cohort study of adults newly diagnosed with MDD between 2015 and 2020 and followed their health system costs over two years. The expenditure proportion of MDD-related, patient-paid costs relative to non-subsistence income was estimated, instances of financial hardship were identified, and the slope index of inequality (SII) between the highest and lowest income groups was compared across regions. RESULTS: There were 250,855 individuals diagnosed with MDD in British Columbia over the observation period. Costs to the health system totalled >$1.5 billion (2020 CDN), averaging $138/week for the first 12 weeks following a new diagnosis, $65/week thereafter to week 52, and $55/week for weeks 53-104, unless MDD was refractory to treatment ($125/week between weeks 12-52 and $101/week over weeks 53-104). The proportion of MDD-attributable costs not covered by the health system was 2-15x greater than costs covered by the health system, exceeding $700/week for patients with severe MDD or MDD that was refractory to treatment. Population members in lower-income groups and urban homeowners had disadvantages in the distribution of financial protection received from the health system (SII reached -8.47 and 15.25, respectively); however, financial hardship and inequities were mitigated province-wide if MDD went into remission (SII -0.07 to 0.6). CONCLUSIONS: MDD-attributable costs to health systems and patients are highest in the first 12 weeks after a new diagnosis. During this time, lower-income groups and homeowners in urban areas run the risk of financial hardship.
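
For readers unfamiliar with the slope index of inequality, a common way to compute it is to regress the outcome on the midpoint of each income group's cumulative population rank (a ridit score) and take the slope. The sketch below uses entirely hypothetical group shares and outcome values; it shows the mechanics only and does not reproduce the paper's analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical income quintiles (lowest to highest), population shares,
# and an outcome per group (e.g., a measure of financial protection).
shares = np.array([0.20, 0.20, 0.20, 0.20, 0.20])
outcome = np.array([4.1, 5.0, 5.8, 7.2, 9.6])   # made-up values

# Ridit score: midpoint of each group's cumulative population rank (0 to 1).
cum = np.cumsum(shares)
ridit = cum - shares / 2

# Weighted least squares of outcome on ridit; the slope is the SII, i.e. the
# absolute gap implied between the very bottom and very top of the ranking.
X = sm.add_constant(ridit)
fit = sm.WLS(outcome, X, weights=shares).fit()
print("SII (slope):", round(fit.params[1], 2))
```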


Subject(s)
Depressive Disorder, Major; Adult; Humans; Depressive Disorder, Major/epidemiology; Depressive Disorder, Major/therapy; Cohort Studies; British Columbia/epidemiology; Depression; Health Expenditures; Health Care Costs
3.
CMAJ ; 195(44): E1499-E1508, 2023 Nov 14.
Article in English | MEDLINE | ID: mdl-37963621

ABSTRACT

BACKGROUND: Pharmacogenomic testing to identify variations in genes that influence metabolism of antidepressant medications can enhance efficacy and reduce adverse effects of pharmacotherapy for major depressive disorder. We sought to establish the cost-effectiveness of implementing pharmacogenomic testing to guide prescription of antidepressants. METHODS: We developed a discrete-time microsimulation model of care pathways for major depressive disorder in British Columbia, Canada, to evaluate the effectiveness and cost-effectiveness of pharmacogenomic testing from the public payer's perspective over 20 years. The model included unique patient characteristics (e.g., metabolizer phenotypes) and used estimates derived from systematic reviews, analyses of administrative data (2015-2020) and expert judgment. We estimated incremental costs, life-years and quality-adjusted life-years (QALYs) for a representative cohort of patients with major depressive disorder in BC. RESULTS: Pharmacogenomic testing, if implemented in BC for adult patients with moderate-severe major depressive disorder, was predicted to save the health system $956 million ($4926 per patient) and bring health gains of 0.064 life-years and 0.381 QALYs per patient (12 436 life-years and 74 023 QALYs overall over 20 yr). These savings were mainly driven by slowing or avoiding the transition to refractory (treatment-resistant) depression. Pharmacogenomic-guided care was associated with 37% fewer patients with refractory depression over 20 years. Sensitivity analyses estimated that costs of pharmacogenomic testing would be offset within about 2 years of implementation. INTERPRETATION: Pharmacogenomic testing to guide antidepressant use was estimated to yield population health gains while substantially reducing health system costs. These findings suggest that pharmacogenomic testing offers health systems an opportunity for a major value-promoting investment.
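
Because the reported per-patient figures combine cost savings with QALY gains, pharmacogenomic testing dominates usual care and an ICER is not meaningful; net monetary benefit is the usual summary in that situation. The short calculation below uses the abstract's per-patient figures; the $50,000/QALY willingness-to-pay threshold is our assumption for illustration, not a value taken from the paper.

```python
# Per-patient figures reported in the abstract.
delta_cost = -4926.0   # negative = savings to the health system (CAD)
delta_qaly = 0.381

# Willingness-to-pay threshold assumed for illustration only.
wtp = 50_000.0         # CAD per QALY

# Incremental net monetary benefit = wtp * QALY gain - incremental cost.
nmb = wtp * delta_qaly - delta_cost
print(f"Incremental net monetary benefit: ${nmb:,.0f} per patient")  # ~= $23,976
```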


Subject(s)
Depressive Disorder, Major; Adult; Humans; Depressive Disorder, Major/drug therapy; Depressive Disorder, Major/genetics; Pharmacogenetics; Depression; Cost-Benefit Analysis; Antidepressive Agents/therapeutic use; Quality-Adjusted Life Years; British Columbia
4.
Can J Public Health ; 113(5): 653-664, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35834166

ABSTRACT

OBJECTIVES: To determine the extent and characteristics of in-school transmission of SARS-CoV-2 and to identify risk factors for in-school acquisition of COVID-19 in one of Canada's largest school districts. METHODS: We conducted a retrospective chart review of all reportable COVID-19 cases among individuals who attended a kindergarten to Grade 12 (K-12) school within the study area between January and June of the 2020-2021 school year. The acquisition source was inferred based on epidemiological data and, when available, whole genome sequencing results. Mixed-effects logistic regression was performed to identify risk factors independently associated with in-school acquisition of COVID-19. RESULTS: Overall, 2877 cases of COVID-19 among staff and students were included in the analysis; of those, 9.1% had evidence of in-school acquisition. The median cluster size was two cases (interquartile range: 1). Risk factors for in-school acquisition included being male (adjusted odds ratio [aOR]: 1.59, 95% confidence interval [CI]: 1.17-2.17), being a staff member (aOR: 2.62, 95% CI: 1.64-4.21), and attending or working in an independent school (aOR: 2.28, 95% CI: 1.13-4.62). CONCLUSION: In-school acquisition of COVID-19 was uncommon during the study period. Risk factors were identified to support the implementation of mitigation strategies that can further reduce transmission.
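
As a sketch of the kind of mixed-effects logistic model described (a random intercept for school to account for clustering, fixed effects for the risk factors), the snippet below uses statsmodels' Bayesian mixed GLM as one possible tool. The variable names, simulated data, and variational-Bayes fit are illustrative assumptions; the paper's exact specification and software may differ.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n = 2877
# Hypothetical analysis dataset: one row per case, with a school identifier
# for clustering and an indicator for in-school acquisition.
data = pd.DataFrame({
    "in_school": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "staff": rng.integers(0, 2, n),
    "independent_school": rng.integers(0, 2, n),
    "school_id": rng.integers(0, 60, n),
})

# Random intercept for school; fixed effects for the three risk factors.
model = BinomialBayesMixedGLM.from_formula(
    "in_school ~ male + staff + independent_school",
    {"school": "0 + C(school_id)"},
    data,
)
fit = model.fit_vb()
# Exponentiating the fixed-effect posterior means gives adjusted odds ratios.
print(fit.summary())
```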




Subject(s)
COVID-19; SARS-CoV-2; British Columbia/epidemiology; COVID-19/epidemiology; Female; Humans; Male; Retrospective Studies; Schools
5.
Ann Am Thorac Soc ; 19(7): 1102-1111, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35007497

ABSTRACT

Rationale: Cardiovascular disease accounts for one-third of deaths in patients with chronic obstructive pulmonary disease (COPD). Better control of cardiovascular risk factors in primary care could improve outcomes. Objectives: To define the prevalence, monitoring, treatment, and control of risk factors in patients with COPD. Methods: Repeated cross-sectional analysis of primary care electronic medical records for all patients with COPD in the Canadian Primary Care Sentinel Surveillance Network from 2013 to 2018 (n = 32,695 in 2018). A control group was matched 1:1 for age, sex, and rural residence (n = 32,638 in 2018). Five risk factors were defined using validated definitions including laboratory results: hypertension, dyslipidemia, diabetes, obesity, and smoking. Results: All risk factors were more common in patients with COPD compared with matched control subjects, including hypertension (52.3% vs. 44.9%), dyslipidemia (62.0% vs. 57.8%), diabetes (25.0% vs. 20.2%), obesity (40.8% vs. 36.8%), and smoking (40.9% vs. 11.4%). The mean Framingham risk score was 20.6% versus 18.6%, with 53.8% of patients with COPD being high risk (≥20%). Monitoring of risk factors within the last year in patients with COPD in 2018 was suboptimal: 71.8% for hypertension, 39.4% for dyslipidemia, 74.5% for diabetes, and 52.3% for obesity. Smoking status was infrequently recorded in the electronic record. In those monitored, guideline-recommended targets were achieved in 60.8%, 46.6%, 57.4%, 10.6%, and 12.0% of patients for the respective risk factors. Cardiovascular therapies including angiotensin-converting enzyme inhibitors (69%), statins (69%), and smoking cessation therapies (27%) were underused. Conclusions: In patients with COPD, major cardiovascular risk factors are common, yet inadequately monitored, undertreated, and poorly controlled. Strategies are needed to improve comprehensive risk factor management proven to reduce cardiovascular morbidity and mortality.


Subject(s)
Cardiovascular Diseases; Dyslipidemias; Hypertension; Pulmonary Disease, Chronic Obstructive; Canada/epidemiology; Cardiovascular Diseases/epidemiology; Cardiovascular Diseases/prevention & control; Cross-Sectional Studies; Heart Disease Risk Factors; Humans; Hypertension/epidemiology; Obesity/complications; Obesity/epidemiology; Pulmonary Disease, Chronic Obstructive/epidemiology; Risk Factors
7.
J Clin Virol ; 142: 104914, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34304088

ABSTRACT

BACKGROUND: SARS-CoV-2 antibody testing is required for estimating population seroprevalence and for vaccine response studies. It may also increase case identification when used as an adjunct to routine molecular testing. We performed a validation study and evaluated the use of automated high-throughput assays in a field study of COVID-19-affected care facilities. METHODS: Six automated assays were assessed: 1) DiaSorin LIAISON™ SARS-CoV-2 S1/S2 IgG; 2) Abbott ARCHITECT™ SARS-CoV-2 IgG; 3) Ortho VITROS™ Anti-SARS-CoV-2 Total; 4) Ortho VITROS™ Anti-SARS-CoV-2 IgG; 5) Siemens SARS-CoV-2 Total Assay; and 6) Roche Elecsys™ Anti-SARS-CoV-2. The validation study included 107 samples (42 known positive; 65 presumed negative). The field study included 296 samples (92 PCR positive; 204 PCR negative or not PCR tested). All samples were tested by the six assays. RESULTS: All assays had sensitivities >90% in the field study, while in the validation study, 5/6 assays were >90% sensitive and DiaSorin was 79% sensitive. Specificities and negative predictive values were >95% for all assays. In the field study, estimated positive predictive values (PPVs) at 1-10% disease prevalence were 100% for the Siemens, Abbott, and Roche assays, while the DiaSorin and Ortho assays had lower PPVs at 1% prevalence that increased at 5-10% prevalence. In the field study, addition of serology increased diagnoses by 16% compared with PCR testing alone. CONCLUSIONS: All assays evaluated in this study demonstrated high sensitivity and specificity for samples collected at least 14 days post-symptom onset, while sensitivity was variable 0-14 days after infection. The addition of serology to the outbreak investigations increased case detection by 16%.
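
The prevalence dependence of PPV and NPV described in the abstract follows directly from Bayes' theorem. The sketch below uses illustrative sensitivity and specificity values, not the per-assay estimates from the paper.

```python
def predictive_values(sens: float, spec: float, prev: float) -> tuple[float, float]:
    """Positive and negative predictive values via Bayes' theorem."""
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    fn = (1 - sens) * prev
    tn = spec * (1 - prev)
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative assay performance only; see the paper for actual per-assay figures.
for prev in (0.01, 0.05, 0.10):
    ppv, npv = predictive_values(sens=0.95, spec=0.995, prev=prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```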


Subject(s)
COVID-19; SARS-CoV-2; Antibodies, Viral; British Columbia; Humans; Immunoassay; Sensitivity and Specificity; Seroepidemiologic Studies
8.
CMAJ Open ; 9(2): E376-E383, 2021.
Article in English | MEDLINE | ID: mdl-33863795

ABSTRACT

BACKGROUND: Heart failure (HF) poses a substantial global health burden, particularly in patients with chronic obstructive pulmonary disease (COPD). The objective of this study was to validate an electronic medical record-based definition of HF in patients with COPD in primary care practices in the province of British Columbia, Canada. METHODS: We conducted a cross-sectional retrospective chart review from Sept. 1, 2018, to Dec. 31, 2018, for a cohort of patients from primary care practices in BC whose physicians were recruited through the BC node of the Canadian Primary Care Sentinel Surveillance Network. Heart failure case definitions were developed by combining diagnostic codes, medication information and laboratory values available in primary care electronic medical records. These were compared with HF diagnoses identified through detailed chart review as the gold standard. Sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) were calculated for each definition. RESULTS: Charts of 311 patients with COPD were reviewed, of whom 72 (23.2%) had HF. Five categories of definitions were constructed, all of which had appropriate sensitivity, specificity and NPV. The optimal case definition consisted of 1 HF billing code or a specific combination of medications for HF. This definition had excellent specificity (93.3%, 95% confidence interval [CI] 89.4%-96.1%), sensitivity (90.3%, 95% CI 81.0%-96.0%), PPV (80.2%, 95% CI 69.9%-88.3%), and NPV (97.0%, 95% CI 93.8%-98.8%). INTERPRETATION: This comprehensive case definition improves upon previous primary care HF definitions to include medication codes and laboratory data, along with previously used billing codes. A case definition for HF was derived and validated and can be used with electronic medical record data to accurately identify HF in patients with COPD in primary care.
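
The validation metrics in the abstract come from a 2x2 table of the EMR case definition against the chart-review gold standard. The counts below are hypothetical, chosen only to reproduce the reported point estimates from a 311-chart cohort with 72 HF cases; the Wilson intervals shown are ours, not the paper's.

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical 2x2 counts (case definition vs. chart-review gold standard).
tp, fp, fn, tn = 65, 16, 7, 223   # 311 charts in total, 72 with HF

def metric(num: int, den: int) -> str:
    lo, hi = proportion_confint(num, den, method="wilson")
    return f"{num/den:.1%} (95% CI {lo:.1%}-{hi:.1%})"

print("Sensitivity:", metric(tp, tp + fn))
print("Specificity:", metric(tn, tn + fp))
print("PPV        :", metric(tp, tp + fp))
print("NPV        :", metric(tn, tn + fn))
```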


Subject(s)
Heart Failure; Primary Health Care; Pulmonary Disease, Chronic Obstructive; British Columbia/epidemiology; Clinical Laboratory Information Systems/statistics & numerical data; Cross-Sectional Studies; Databases, Pharmaceutical/statistics & numerical data; Electronic Health Records/statistics & numerical data; Female; Health Information Systems/organization & administration; Heart Failure/diagnosis; Heart Failure/epidemiology; Heart Failure/therapy; Humans; International Classification of Diseases; Male; Middle Aged; Predictive Value of Tests; Primary Health Care/methods; Primary Health Care/organization & administration; Primary Health Care/standards; Pulmonary Disease, Chronic Obstructive/diagnosis; Pulmonary Disease, Chronic Obstructive/epidemiology; Pulmonary Disease, Chronic Obstructive/therapy; Quality Improvement; Retrospective Studies; Sensitivity and Specificity; Sentinel Surveillance
9.
Am J Infect Control ; 49(8): 978-984, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33762181

ABSTRACT

BACKGROUND: Long-term care facilities across Canada have been disproportionately affected by the COVID-19 pandemic. This study aims to describe the experiences of frontline workers and leaders involved in COVID-19 outbreak management in these facilities, identify best practices, and provide recommendations for improvement. METHODS: This is a qualitative study using key informant, semi-structured interviews. Key informants were defined as individuals with direct experience managing COVID-19 outbreaks in long-term care. Thematic content analysis of interview transcripts identified key themes important for outbreak management. RESULTS: Twenty-three interviews were conducted with key informants from the following categories: public health, health authority leadership for long-term care, infection prevention and control, long-term care operators, and frontline staff. Thematic analysis identified eight themes as critical factors for outbreak management: (1) early identification of cases, (2) the suite of public health interventions implemented, (3) external support and assistance, (4) staff training and education, (5) personal protective equipment use and supply, (6) workplace culture, organizational leadership and management, (7) coordination and communication, and (8) staffing. CONCLUSIONS: Best practices and areas for improvement in outbreak response identified in this study can help to inform policy and practice to reduce the impact of COVID-19 in these settings.


Subject(s)
COVID-19; Pandemics; Canada; Disease Outbreaks; Humans; Long-Term Care; Qualitative Research; SARS-CoV-2
10.
Open Forum Infect Dis ; 8(3): ofab043, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33723509

ABSTRACT

A comparison of rapid point-of-care serology tests using finger prick and venous blood was conducted in 278 participants. In a laboratory setting, immunoglobulin G (IgG) sensitivity neared 100%; however, IgG sensitivity dropped markedly (to 82%) in field testing. Possible factors include finger prick volume variability, hemolysis, cassette readability, and operator training.

11.
Infect Control Hosp Epidemiol ; 42(10): 1181-1188, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33397533

ABSTRACT

OBJECTIVE: A Canadian health authority implemented a multisectoral intervention designed to control severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission during long-term care facility (LTCF) outbreaks. The primary objective was to evaluate the effectiveness of the intervention 14 days after implementation. DESIGN: Quasi-experimental, segmented regression analysis. INTERVENTION: A series of outbreak measures classified into 4 categories: case and contact management, proactive case detection, rigorous infection control practices, and resource prioritization and stewardship. METHODS: A mixed-effects segmented Poisson regression model was fitted to the incidence rate of coronavirus disease 2019 (COVID-19), calculated every 2 days, within each facility and case type (staff vs residents). For each facility, the outbreak time period was segmented into an early outbreak period (within 14 days of the intervention) and postintervention period (beyond 14 days following the intervention). Model outputs quantified COVID-19 incidence trend and rate changes between these 2 periods. A secondary model was constructed to identify effect modification by case type. RESULTS: The significant upward trend in COVID-19 incidence rate during the early outbreak period (rate ratio [RR], 1.07; 95% confidence interval [CI], 1.03-1.11; P < .001) reversed during the postintervention period (RR, 0.73; 95% CI, 0.67-0.80; P < .001). The average trend did not differ by case type during the early outbreak period (P > .05) or the postintervention period (P > .05). However, staff had a 70% larger decrease in the average rate of COVID-19 during the postintervention period than residents (RR, 0.30; 95% CI, 0.10-0.88; P < .05). CONCLUSIONS: Our study provides evidence for the effectiveness of this intervention to reduce the transmission of COVID-19 in LTCFs. This intervention can be adapted and utilized by other jurisdictions to protect vulnerable individuals in LTCFs.
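
The segmented (interrupted time-series) Poisson model can be sketched for a single facility as below. The case counts, denominators, and segment break are invented for illustration, and this simplification omits the mixed-effects structure across facilities and case types that the paper used.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical outbreak: cases counted every 2 days, with the segment break
# 14 days (index 7) after the intervention.
df = pd.DataFrame({
    "t": np.arange(14),                                     # 2-day intervals
    "cases": [1, 2, 3, 5, 7, 9, 12, 10, 7, 5, 4, 2, 1, 1],  # made-up counts
    "at_risk": [220] * 14,                                  # residents + staff
})
df["post"] = (df["t"] >= 7).astype(int)
df["t_after"] = np.maximum(df["t"] - 7, 0)   # time since the segment break

# Poisson GLM with a log person-time offset: exp(coefficients) are rate ratios;
# 't' captures the early-outbreak trend and 't_after' the change in trend after.
fit = smf.glm(
    "cases ~ t + post + t_after",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["at_risk"]),
).fit()
print(np.exp(fit.params))
```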


Subject(s)
COVID-19; Long-Term Care; Canada/epidemiology; Humans; SARS-CoV-2; Skilled Nursing Facilities
12.
Front Immunol ; 12: 775420, 2021.
Article in English | MEDLINE | ID: mdl-35046939

ABSTRACT

Background: As part of the public health outbreak investigations, serological surveys were carried out following two COVID-19 outbreaks in April 2020 and October 2020 in one long term care facility (LTCF) in British Columbia, Canada. This study describes the serostatus of the LTCF residents and monitors changes in their humoral response to SARS-CoV-2 and other human coronaviruses (HCoV) over seven months. Methods: A total of 132 serum samples were collected from all 106 consenting residents (aged 54-102) post-first outbreak (N=87) and post-second outbreak (N=45) in one LTCF; 26/106 participants provided their serum following both COVID-19 outbreaks, permitting longitudinal comparisons between surveys. Health Canada-approved commercial serologic tests and a pan-coronavirus multiplexed immunoassay were used to evaluate antibody levels against the spike protein, nucleocapsid, and receptor binding domain (RBD) of SARS-CoV-2, as well as the spike proteins of HCoV-229E, HCoV-HKU1, HCoV-NL63, and HCoV-OC43. Statistical analyses were performed to describe the humoral response to SARS-CoV-2 among residents longitudinally. Findings: Survey findings demonstrated that among the 26 individuals who participated in both surveys, all 10 individuals seropositive after the first outbreak continued to be seropositive following the second outbreak, with no reinfections identified among them. The SARS-CoV-2 attack rate in the second outbreak (28.6%) was lower than in the first outbreak (40.2%), though not statistically significant (P>0.05). Gradual waning of anti-nucleocapsid antibodies to SARS-CoV-2 was observed on commercial (median Δ=-3.7, P=0.0098) and multiplexed immunoassay (median Δ=-169579, P=0.014) platforms; however, anti-spike and anti-RBD antibodies did not exhibit a statistically significant decline over 7 months. Elevated antibody levels for beta-HCoVs OC43 (P<0.0001) and HKU1 (P=0.0027) were observed among individuals seropositive for SARS-CoV-2 compared to seronegative individuals. Conclusion: Our study utilized well-validated serological platforms to demonstrate that humoral responses to SARS-CoV-2 persisted for at least 7 months. Elevated OC43 and HKU1 antibodies among SARS-CoV-2 seropositive individuals may be attributable to cross-reactivity and/or boosting of the humoral response.
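
A paired decline in antibody signal of the kind reported (median change with a P value) is commonly assessed with a Wilcoxon signed-rank test on the residents sampled at both surveys. The values in the sketch below are simulated; it shows the test mechanics only, not the study's data or its exact statistical procedure.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)
# Hypothetical paired anti-nucleocapsid signals for the 26 residents sampled
# after both outbreaks (arbitrary assay units).
first_survey = rng.uniform(5, 15, size=26)
second_survey = first_survey - rng.uniform(0.5, 6.0, size=26)  # simulated waning

stat, p = wilcoxon(first_survey, second_survey)
print(f"median change: {np.median(second_survey - first_survey):.2f}, p = {p:.4f}")
```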


Subject(s)
Antibodies, Viral/blood; COVID-19/blood; Disease Outbreaks; Long-Term Care; SARS-CoV-2/metabolism; Aged; Aged, 80 and over; COVID-19/epidemiology; Canada; Female; Humans; Male; Time Factors
13.
Am J Infect Control ; 49(5): 649-652, 2021 May.
Article in English | MEDLINE | ID: mdl-33086096

ABSTRACT

A cross-sectional serological survey was carried out in two long-term care facilities that experienced COVID-19 outbreaks in order to evaluate current clinical COVID-19 case definitions. Among individuals with a negative or no previous COVID-19 diagnostic test, myalgias, headache, and loss of appetite were associated with serological reactivity. The US CDC probable case definition was also associated with seropositivity. Public health and infection control practitioners should consider these findings for case exclusion in outbreak settings.


Subject(s)
COVID-19 Serological Testing; COVID-19/diagnosis; Disease Outbreaks/prevention & control; Infection Control; SARS-CoV-2; Adult; Aged; Aged, 80 and over; British Columbia/epidemiology; COVID-19/epidemiology; COVID-19/prevention & control; Cross-Sectional Studies; Female; Health Policy; Humans; Long-Term Care; Male; Middle Aged; Public Health; SARS-CoV-2/isolation & purification
14.
Ir J Med Sci ; 189(1): 337-339, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31338690

ABSTRACT

BACKGROUND: Adults ageing with HIV and on antiretroviral therapy have a greater burden of chronic diseases compared with adults without HIV, as reported by Althoff et al. (Curr Opin HIV AIDS 11:527-36, 2016). Therefore, it is important to monitor and evaluate the risk of chronic kidney disease (CKD) in this clinically stable HIV-positive population and intervene when appropriate. The European AIDS Clinical Society (EACS) advises that yearly screening for CKD with estimated glomerular filtration rate (eGFR) calculation and spot urine protein measurement should be performed (European AIDS Clinical Society Guidelines 2018). The Centre for Excellence for Health, Immunity and Infection (CHIP) has created a validated calculator to estimate a patient's risk of CKD, as reported by Mocroft et al. (PLoS Med 12(3):e1001809, 2015). AIMS: (1) To determine the proportion of patients who had a urinary protein-creatinine ratio checked in 2018; (2) To calculate an eGFR for each patient in our cohort using the Modification of Diet in Renal Disease (MDRD) calculation; (3) To calculate the full chronic kidney disease score in our cohort of patients. METHODS: We undertook a retrospective chart review of 80 HIV-positive patients who attended our weekly clinic in Beaumont Hospital, Dublin, Ireland. RESULTS: In our subset of 31 patients who had all the data required to estimate their eGFR and full CKD risk score, 100% (31/31) of calculated eGFRs were >90 mL/min/1.73 m². The median eGFR was 215 mL/min/1.73 m² (range 95.69-418.08 mL/min/1.73 m²). The average CHIP full 5-year risk score for developing CKD was 0.91% (95% CI 0.60-1.21%). One patient was identified with a risk score of 5.05% as they had suffered an acute coronary syndrome event in the past. CONCLUSION: Although this audit was small and had limitations, it highlights the importance of collecting relevant and accurate patient data annually to estimate and mitigate the risk of chronic kidney disease in patients with HIV.
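
The four-variable MDRD equation referred to in the aims is shown below in its IDMS-traceable form (constant 175). The constants are the standard published ones; the example patient is hypothetical.

```python
def egfr_mdrd(scr_mg_dl: float, age: float, female: bool, black: bool) -> float:
    """Four-variable IDMS-traceable MDRD eGFR, in mL/min/1.73 m^2.

    Serum creatinine must be in mg/dL; divide a umol/L value by 88.4 to convert.
    """
    egfr = 175.0 * (scr_mg_dl ** -1.154) * (age ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical patient: creatinine 0.9 mg/dL, 45-year-old male.
print(round(egfr_mdrd(0.9, 45, female=False, black=False), 1))
```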


Subject(s)
Cancer Survivors/statistics & numerical data; HIV Infections/therapy; Chronic Disease; Female; HIV Infections/mortality; Humans; Male; Middle Aged; Retrospective Studies
16.
BMJ Open ; 7(11): e017604, 2017 Nov 03.
Article in English | MEDLINE | ID: mdl-29101138

ABSTRACT

OBJECTIVE: Digital innovations using the internet and mobile phones offer a potential cost-saving solution for overburdened health systems with high service delivery costs, and could improve the efficiency of HIV/STI (sexually transmitted infection) control initiatives. However, the overall evidence for them has not yet been appraised. We evaluated the feasibility and impact of all digital innovations for all HIV/STIs. DESIGN: Systematic review. SETTING/PARTICIPANTS: All settings/all participants. INTERVENTION: We classified digital innovations into (1) mobile health-based (mHealth: SMS (short message service)/phone calls), (2) internet-based mobile and/or electronic health (mHealth/eHealth: social media, avatar-guided computer programs, websites, mobile applications, streamed soap opera videos), and (3) combined innovations (both SMS/phone calls and internet-based mHealth/eHealth). PRIMARY AND SECONDARY OUTCOME MEASURES: Feasibility, acceptability, impact. METHODS: We searched MEDLINE via PubMed, Embase, Cochrane CENTRAL, and Web of Science; abstracted data; explored heterogeneity; and performed a random-effects subgroup analysis. RESULTS: We reviewed 99 studies: 63 (64%) were from America/Europe and 36 (36%) from Africa/Asia; 79% (79/99) were clinical trials; 84% (83/99) evaluated impact. Of the innovations, 70% (69/99) were mHealth based, 21% (21/99) internet based, and 9% (9/99) combined. All digital innovations were highly accepted (26/31; 84%) and feasible (20/31; 65%). Regarding impact measures, mHealth-based innovations (SMS) significantly improved antiretroviral therapy (ART) adherence (pooled OR = 2.15, 95% CI 1.18-3.91) and clinic attendance rates (pooled OR = 1.76, 95% CI 1.28-2.42); internet-based innovations improved clinic attendance (6/6), ART adherence (4/4), and self-care (1/1), while reducing risk (5/5); combined innovations increased clinic attendance, ART adherence, partner notifications, and self-care. Confounding (68%) and selection bias (66%) were observed in observational studies, and attrition bias in 31% of clinical trials. CONCLUSION: Digital innovations were acceptable, feasible, and generated impact. A trend towards the use of internet-based and combined (internet and mobile) innovations was noted. High-quality, large-scale studies with new integrated impact metrics and cost-effectiveness analyses are needed. Findings will appeal to all stakeholders in the HIV/STI global initiatives space.
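
For readers unfamiliar with how a random-effects pooled odds ratio like those above is obtained, the sketch below implements the DerSimonian-Laird estimator on invented per-study 2x2 counts; it is illustrative only and does not reproduce the review's data or exact method.

```python
import numpy as np

# Hypothetical per-study counts: (events_tx, n_tx, events_ctl, n_ctl).
studies = [(45, 100, 30, 100), (60, 150, 48, 150), (25, 80, 15, 80)]

log_or, var = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_or.append(np.log((a * d) / (b * c)))
    var.append(1 / a + 1 / b + 1 / c + 1 / d)   # variance of the log OR
log_or, var = np.array(log_or), np.array(var)

# DerSimonian-Laird between-study variance (tau^2).
w = 1 / var
q = np.sum(w * (log_or - np.sum(w * log_or) / np.sum(w)) ** 2)
c_dl = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c_dl)

# Random-effects weights, pooled estimate, and 95% CI.
w_re = 1 / (var + tau2)
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled OR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f}-{np.exp(pooled + 1.96 * se):.2f})")
```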


Subject(s)
Cell Phone; HIV Infections/drug therapy; Sexually Transmitted Diseases/drug therapy; Telemedicine/methods; Text Messaging; Antiretroviral Therapy, Highly Active; Humans; Medication Adherence; Randomized Controlled Trials as Topic; Self Care/methods
17.
Point Care ; 16(4): 141-150, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29333105

ABSTRACT

OBJECTIVE: Pilot (feasibility) studies form the vast majority of diagnostic studies with point-of-care technologies but often lack clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. METHODS: For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation-centered and patient-centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. FINDINGS: We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Measures and metrics for feasibility, preference, and patient experience were poorly defined; in contrast, acceptability was the best-defined measure. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified when reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. CONCLUSIONS: Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization.

18.
PLoS One ; 11(2): e0149592, 2016.
Article in English | MEDLINE | ID: mdl-26891218

ABSTRACT

INTRODUCTION: Fourth-generation (Ag/Ab combination) point-of-care HIV tests, like the FDA-approved Determine HIV1/2 Ag/Ab Combo test, offer the promise of timely detection of acute HIV infection, relevant in the context of HIV control. However, a synthesis of their performance has not yet been done. In this meta-analysis, we not only assessed device performance but also evaluated the effect of study quality on diagnostic accuracy. METHODS: Two independent reviewers searched seven databases, including conferences and bibliographies, and independently extracted data from 17 studies. Study quality was assessed with QUADAS-2. Data on sensitivity and specificity (overall, antigen, and antibody) were pooled using a Bayesian hierarchical random effects meta-analysis model. Subgroups were analyzed by blood samples (serum/plasma vs. whole blood) and study designs (case-control vs. cross-sectional). RESULTS: The overall specificity of the Determine Combo test was 99.1%, 95% credible interval (CrI) [97.3-99.8]. The overall pooled sensitivity for the device was 88.5%, 95% CrI [80.1-93.4]. When the components of the test were analyzed separately, the pooled specificities were 99.7%, 95% CrI [96.8-100] and 99.6%, 95% CrI [99.0-99.8], for the antigen and antibody components, respectively. Pooled sensitivity of the antibody component was 97.3%, 95% CrI [60.7-99.9], and pooled sensitivity for the antigen component was 12.3%, 95% CrI [1.1-44.2]. No significant differences were found between subgroups by blood sample or study design. However, it was noted that many studies restricted their study sample to p24 antigen or RNA positive specimens, which may have led to underestimation of overall test performance. Detection bias, selection (spectrum) bias, incorporation bias, and verification bias impaired study quality. CONCLUSIONS: Although the specificity of all test components was high, antigen sensitivity merits improvement. Besides the accuracy of the device itself, study quality also impacts the reported performance of the test. These factors must be kept in mind in future evaluations of an improved device, relevant for global scale-up and implementation.
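
A much-simplified, univariate sketch of the kind of hierarchical pooling described (sensitivity only, on the logit scale) is shown below using PyMC as one possible tool. The per-study counts are invented, and the paper's actual model was bivariate (pooling sensitivity and specificity jointly), which this sketch omits.

```python
import numpy as np
import pymc as pm

# Hypothetical per-study counts: true positives among confirmed infections.
tp = np.array([18, 40, 11, 25, 32])
n_pos = np.array([20, 47, 15, 27, 35])

with pm.Model():
    # Study-level logit sensitivities drawn from a common normal distribution.
    mu = pm.Normal("mu", 0.0, 2.0)
    tau = pm.HalfNormal("tau", 1.0)
    theta = pm.Normal("theta", mu, tau, shape=len(tp))
    pm.Deterministic("pooled_sens", pm.math.invlogit(mu))
    pm.Binomial("tp_obs", n=n_pos, p=pm.math.invlogit(theta), observed=tp)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Posterior mean of the pooled (summary) sensitivity.
print(idata.posterior["pooled_sens"].mean().item())
```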


Subject(s)
HIV Infections/diagnosis; Point-of-Care Testing; Bayes Theorem; HIV Seronegativity; HIV Seropositivity/diagnosis; Humans; Quality Assurance, Health Care; Sensitivity and Specificity
19.
Point Care ; 14(3): 81-87, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26366129

ABSTRACT

Implementation of human immunodeficiency virus rapid and point-of-care tests (RDT/POCT) is understood to be impeded by many different factors that operate at 4 main levels (test devices, patients, providers, and health systems), yet a knowledge gap exists in how they act and interact to impede implementation. To fill this gap, and with a view to improving the quality of implementation, we conducted a systematic review. METHODS: Five databases were searched, 16,672 citations were retrieved, and data were abstracted on 132 studies by 2 reviewers. FINDINGS: Across 3 levels (ie, patients, providers, and health systems), a majority (59%, 112/190) of the 190 barriers were related to the integration of RDT/POCT, followed by test-device-related concerns (ie, accuracy) at 41% (78/190). At the patient level, a lack of awareness about tests (15/54, 28%) and time taken to test (12/54, 22%) dominated. At the provider and health system levels, integration of RDT/POCT in clinical workflows (7/24, 29%) and within hospitals (21/34, 62%) prevailed. Accuracy (57/78, 73%) was dominant only at the device level. INTERPRETATION: Integration barriers dominated the findings, followed by test accuracy. Although accuracy has improved over the years, ideal implementation could be achieved by improving the integration of RDT/POCT within clinics, hospitals, and health systems, with clear protocols, training on quality assurance and control, clear communication, and linkage plans to improve patients' health outcomes. This finding is pertinent for the envisioned future implementation and global scale-up of RDT/POCT-based initiatives.
